MixLasso: Generalized Mixed Regression via Convex Atomic-Norm Regularization

Neural Information Processing Systems

We consider a generalization of mixed regression where the response is an additive combination of several mixture components. Standard mixed regression is a special case where each response is generated from exactly one component. Typical approaches to the mixed regression problem employ local search methods such as Expectation Maximization (EM) that are prone to spurious local optima. On the other hand, a number of recent theoretically motivated tensor-based methods either have high sample complexity or require knowledge of the input distribution, which is not available in most practical situations. In this work, we study a novel convex estimator, MixLasso, for the estimation of generalized mixed regression, based on an atomic norm specifically constructed to regularize the number of mixture components. Our estimator yields a risk bound that trades off between prediction accuracy and model sparsity without imposing stringent assumptions on the input/output distribution, and it can be easily adapted to the case of non-linear functions. In our numerical experiments on mixtures of linear as well as non-linear regressions, the proposed method yields high-quality solutions in a wider range of settings than existing approaches.
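
To make the model concrete, here is a minimal sketch (in NumPy, with hypothetical sizes n, d, K) of the data-generating process the abstract describes: each response is an additive combination of several of the K linear components, and standard mixed regression is recovered when exactly one component is active per sample.

```python
import numpy as np

# Sketch of generalized mixed regression (hypothetical sizes n, d, K).
rng = np.random.default_rng(0)
n, d, K = 200, 10, 3

X = rng.normal(size=(n, d))          # inputs
W = rng.normal(size=(K, d))          # K linear mixture components
Z = rng.integers(0, 2, size=(n, K))  # which components contribute to each sample
noise = 0.1 * rng.normal(size=n)

# generalized MR: y_i = sum_k Z[i, k] * <w_k, x_i> + noise
y = np.einsum('nk,kd,nd->n', Z, W, X) + noise

# standard MR as the special case of one-hot assignments
Z_onehot = np.eye(K)[rng.integers(0, K, size=n)]
y_std = np.einsum('nk,kd,nd->n', Z_onehot, W, X) + noise
```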


The Learnability of In-Context Learning

Neural Information Processing Systems

Our theoretical analysis reveals that in this setting, in-context learning is more about identifying the task than about learning it, a result which is in line with a series of recent empirical findings.


Alleviating Label Switching with Optimal Transport

Pierre Monteiller, Sebastian Claici, Edward Chien, Farzaneh Mirzazadeh, Justin M. Solomon, Mikhail Yurochkin

Neural Information Processing Systems

Sampling and inference algorithms behave poorly as the number of modes increases, and this problem is only exacerbated in this context, since increasing the number of components in the mixture model leads to a super-exponential increase in the number of modes of the posterior.
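
As a quick illustration of why label switching arises, the toy check below (an illustration, not the paper's optimal-transport method) verifies that a mixture log-likelihood is unchanged under every relabeling of the components, so a K-component posterior carries K! symmetric copies of each mode.

```python
import numpy as np
from itertools import permutations
from scipy.stats import norm

# Toy check of label switching: permuting component labels leaves the
# likelihood invariant, so the posterior has K! copies of every mode.
rng = np.random.default_rng(1)
x = rng.normal(size=100)             # toy observations
mu = np.array([-2.0, 0.0, 2.0])      # component means (K = 3)
w = np.ones(3) / 3                   # uniform mixing weights

def log_lik(mu, w):
    # log prod_n sum_k w_k N(x_n | mu_k, 1)
    dens = w * norm.pdf(x[:, None], loc=mu, scale=1.0)
    return np.log(dens.sum(axis=1)).sum()

# identical value for all 3! = 6 relabelings
for p in permutations(range(3)):
    idx = list(p)
    print(p, log_lik(mu[idx], w[idx]))
```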



Benefits of over-parameterization with EM

Ji Xu, Daniel J. Hsu, Arian Maleki

Neural Information Processing Systems

Expectation Maximization (EM) is among the most popular algorithms for maximum likelihood estimation, but it is generally only guaranteed to find stationary points of the log-likelihood objective. The goal of this article is to present theoretical and empirical evidence that over-parameterization can help EM avoid spurious local optima in the log-likelihood. We consider the problem of estimating the mean vectors of a Gaussian mixture model in a scenario where the mixing weights are known. Our study shows that the global behavior of EM, when one uses an over-parameterized model in which the mixing weights are treated as unknown, is better than that when one uses the (correct) model with the mixing weights fixed to the known values. For symmetric Gaussian mixtures with two components, we prove that introducing the (statistically redundant) weight parameters enables EM to find the global maximizer of the log-likelihood starting from almost any initial mean parameters, whereas EM without this over-parameterization may very often fail. For other Gaussian mixtures, we provide empirical evidence that shows similar behavior. Our results corroborate the value of over-parameterization in solving non-convex optimization problems, previously observed in other domains.
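
The following sketch (assuming 1-D data and unit-variance components) contrasts the two EM variants the abstract compares: one with the mixing weight fixed to its known value of 1/2, and one that over-parameterizes by also updating the weight.

```python
import numpy as np
from scipy.stats import norm

# EM for a two-component, unit-variance Gaussian mixture in 1-D,
# with and without treating the mixing weight as unknown.
rng = np.random.default_rng(2)
theta = 2.0
x = np.concatenate([rng.normal(theta, 1, 500), rng.normal(-theta, 1, 500)])

def em(x, mu, w=0.5, update_w=False, iters=300):
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point
        p1 = w * norm.pdf(x, loc=mu[0], scale=1.0)
        p2 = (1 - w) * norm.pdf(x, loc=mu[1], scale=1.0)
        r = p1 / (p1 + p2)
        # M-step: update the means (and, if over-parameterized, the weight)
        mu = np.array([(r * x).sum() / r.sum(),
                       ((1 - r) * x).sum() / (1 - r).sum()])
        if update_w:
            w = r.mean()
    return mu, w

bad_init = np.array([3.0, 1.0])            # both means initialized on one side
print(em(x, bad_init, update_w=False))     # fixed weights: can stall at a spurious optimum
print(em(x, bad_init, update_w=True))      # free weight: tends to recover +/- theta
```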


Deep Counterfactual Estimation with Categorical Background Variables

Neural Information Processing Systems

Typically, given an individual, a treatment assignment, and a treatment outcome, the counterfactual question asks what would have happened to that individual had it been given another treatment, everything else being equal.
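
A toy potential-outcomes sketch (hypothetical structural model and numbers) makes the definition concrete: each individual has an outcome under every treatment, but only the outcome under the assigned treatment is observed.

```python
import numpy as np

# Toy potential-outcomes model (hypothetical numbers): u plays the role of
# "everything else being equal"; only one outcome per individual is observed.
rng = np.random.default_rng(3)
n = 5
u = rng.normal(size=n)                 # background variable per individual
y0 = u                                 # potential outcome under treatment 0
y1 = u + 1.0                           # potential outcome under treatment 1
t = rng.integers(0, 2, size=n)         # observed treatment assignment

y_factual = np.where(t == 1, y1, y0)          # what actually happened
y_counterfactual = np.where(t == 1, y0, y1)   # what would have happened
print(np.column_stack([t, y_factual, y_counterfactual]))
```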


MixLasso: Generalized Mixed Regression via Convex Atomic-Norm Regularization

Ian En-Hsu Yen, Wei-Cheng Lee, Kai Zhong, Sung-En Chang, Pradeep K. Ravikumar, Shou-De Lin

Neural Information Processing Systems

The Mixed Regression (MR) problem considers the estimation of K functions from a collection of input-output samples, where for each sample the output is generated by one of the K regression functions. When fitting linear functions in a noiseless setting, this is equivalent to solving K linear systems while, at the same time, identifying which system each equation belongs to. The MR formulation can be employed as an approach to decompose a complicated function into K simpler ones, by splitting the observations into K classes.
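
For intuition, the sketch below solves a small noiseless instance by plain alternating minimization: assign each equation to its best-fitting system, then re-solve K least-squares problems. This is an illustrative baseline, not the convex MixLasso estimator itself.

```python
import numpy as np

# Illustrative baseline for noiseless mixed linear regression (not MixLasso):
# alternate between assigning equations to systems and re-solving each system.
rng = np.random.default_rng(4)
n, d, K = 300, 5, 3
X = rng.normal(size=(n, d))
W_true = rng.normal(size=(K, d))
z_true = rng.integers(0, K, size=n)
y = np.einsum('nd,nd->n', X, W_true[z_true])   # noiseless responses

W = rng.normal(size=(K, d))                    # random initialization
for _ in range(50):
    resid = (X @ W.T - y[:, None]) ** 2        # (n, K) squared residuals
    z = resid.argmin(axis=1)                   # assign each equation to a system
    for k in range(K):
        mask = z == k
        if mask.any():                         # least squares for system k
            W[k] = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
```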


A Gaussian Process-Bayesian Bernoulli Mixture Model for Multi-Label Active Learning

Neural Information Processing Systems

However, data annotation for training MLC models becomes much more labor-intensive due to the correlated (hence non-exclusive) labels and a potentially large and sparse label space.



Supplementary Materials for Exemplar VAE: Linking Generative Models, Nearest Neighbor Retrieval, and Data Augmentation

Neural Information Processing Systems

This metric computes the variance of the mean of the latent encoding of the data points in each dimension of the latent space, Var(μ_φ(x)_i), where x is sampled from the dataset. For hierarchical architectures, the reported number is for z_2, which is the highest stochastic layer. To regularize the Exemplar VAE, we used leave-one-out and exemplar sub-sampling. That is why we did not compare directly against a mixture model prior in the primary experimental section. Three different architectures are used in the experiments, described below.
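
A small sketch of the metric as described, using a stand-in linear "encoder" for illustration (the real encoders are the architectures described in the text): compute the posterior means μ_φ(x) over the dataset and take the per-dimension variance.

```python
import numpy as np

# Sketch of the variance metric with a stand-in linear "encoder".
def latent_mean_variance(encode_mean, data):
    mu = encode_mean(data)        # posterior means mu_phi(x), shape (n, latent_dim)
    return mu.var(axis=0)         # Var(mu_phi(x)_i) for each latent dimension i

rng = np.random.default_rng(5)
A = rng.normal(size=(784, 16))                    # hypothetical encoder weights
data = rng.normal(size=(1000, 784))               # stand-in dataset
print(latent_mean_variance(lambda x: x @ A, data))
```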